Implications of stochastic ion channel gating and dendritic spine plasticity for neural information processing and storage
On short timescales, the brain represents, transmits, and processes information through
the electrical activity of its neurons. On long timescales, the brain stores information
in the strength of the synaptic connections between its neurons. This thesis examines
the surprising implications of two separate, well-documented microscopic processes
— the stochastic gating of ion channels and the plasticity of dendritic spines — for
neural information processing and storage.
Electrical activity in neurons is mediated by many small membrane proteins called ion
channels. Although single ion channels are known to open and close stochastically, the macroscopic behaviour of a population of ion channels is often approximated as deterministic, on the assumption that the intrinsic noise introduced by stochastic gating is too weak to matter. In this study we take advantage
of newly developed efficient computer simulation methods to examine cases
where this assumption breaks down. We find that ion channel noise can mediate spontaneous
action potential firing in small nerve fibres, and explore its possible implications
for neuropathic pain disorders of peripheral nerves. We then characterise the
magnitude of ion channel noise for single neurons in the central nervous system, and
demonstrate through simulation that channel noise is sufficient to corrupt synaptic integration,
spike timing and spike reliability in dendritic neurons.
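To make the deterministic approximation and its breakdown concrete, here is a minimal sketch in Python (illustrative only, not code from the thesis; the rate constants and channel count are assumed values) that simulates a population of two-state channels both ways: the mean-field ODE and stochastic binomial gating.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta = 0.5, 1.0      # opening/closing rates (1/ms); assumed values
    N, dt, T = 100, 0.01, 50.0  # channel count, time step (ms), duration (ms)

    p_det = 0.0   # deterministic open fraction
    n_open = 0    # stochastic number of open channels
    det_trace, sto_trace = [], []

    for _ in range(int(T / dt)):
        # mean-field ODE: dp/dt = alpha*(1 - p) - beta*p
        p_det += dt * (alpha * (1.0 - p_det) - beta * p_det)
        # stochastic update: each closed channel opens with prob. alpha*dt,
        # each open channel closes with prob. beta*dt (valid for small dt)
        opened = rng.binomial(N - n_open, alpha * dt)
        closed = rng.binomial(n_open, beta * dt)
        n_open += opened - closed
        det_trace.append(p_det)
        sto_trace.append(n_open / N)

    print(f"deterministic steady state: {det_trace[-1]:.3f}")
    print(f"stochastic mean (2nd half): {np.mean(sto_trace[len(sto_trace)//2:]):.3f}")

The stochastic trace fluctuates around the deterministic one with relative amplitude of order 1/sqrt(N), which is why channel noise matters most in small structures such as thin axons and dendrites.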
The second topic concerns neural information storage. Learning and memory in the brain have long been believed to be mediated by changes in the strengths of synaptic
connections between neurons — a phenomenon termed synaptic plasticity. Most
excitatory synapses in the brain are hosted on small membrane structures called dendritic
spines, and plasticity of these synapses is dependent on calcium concentration
changes within the dendritic spine. In the last decade, it has become clear that spines
are highly dynamic structures that appear and disappear, and can shrink and enlarge
on rapid timescales. It is also clear that this spine structural plasticity is intimately
linked to synaptic plasticity. Small spines host weak synapses, and large spines host
strong synapses. Because spine size is one factor which determines synaptic calcium
concentration, it is likely that spine structural plasticity influences the rules of synaptic
plasticity. We theoretically study the consequences of this observation, and find that
different spine-size to synaptic-strength relationships can lead to qualitative differences
in long-term synaptic strength dynamics and information storage. This novel theory unifies a range of previously disparate data, including the unimodal distribution of synaptic strengths, the saturation of synaptic plasticity, and the stability of strong synapses.
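As a toy illustration of why the spine-size to synaptic-strength mapping matters, the following Python sketch uses assumed functional forms (calcium proportional to influx divided by spine volume, with a two-threshold calcium plasticity rule); it is not the model from the thesis, only a caricature of the argument.

    import numpy as np

    def calcium(w, coupling):
        influx = np.sqrt(w)   # assumed: Ca influx grows sublinearly with strength
        volume = w if coupling == "linear" else 1.0   # spine-size/strength mapping
        return influx / volume

    def dw(ca, theta_d=0.4, theta_p=0.8):
        if ca > theta_p:
            return +0.05      # potentiation above the high calcium threshold
        if ca > theta_d:
            return -0.05      # depression in the intermediate calcium window
        return 0.0

    for coupling in ("linear", "none"):
        w = 1.0
        for _ in range(200):
            w = float(np.clip(w + dw(calcium(w, coupling)), 0.1, 10.0))
        print(f"volume-strength coupling {coupling:>6s}: final strength w = {w:.2f}")

When volume tracks strength, calcium concentration falls as the synapse grows, so plasticity saturates and strong synapses stabilise; with a fixed volume, calcium keeps rising with strength and potentiation runs away to the ceiling.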
Adaptive Estimators Show Information Compression in Deep Neural Networks
To improve how neural networks function, it is crucial to understand their
learning process. The information bottleneck theory of deep learning proposes
that neural networks achieve good generalization by compressing their
representations to disregard information that is not relevant to the task.
However, empirical evidence for this theory is conflicting, as compression was
only observed when networks used saturating activation functions. In contrast,
networks with non-saturating activation functions achieved comparable levels of
task performance but did not show compression. In this paper we develop more robust mutual information estimation techniques that adapt to the hidden activity of neural networks and produce more sensitive measurements of activations from all functions, especially unbounded ones. Using these adaptive estimation techniques, we explore compression in networks with a range of different activation functions. With two improved estimation methods we show, first, that saturation of the activation function is not required for compression, and that the amount of compression varies between activation functions. We also find a large amount of variation in compression between different network initializations. Second, we see that L2 regularization leads to significantly increased compression while preventing overfitting. Finally, we show that only compression of the last layer is positively correlated with generalization.

Comment: accepted as a poster presentation at ICLR 2019 and reviewed on OpenReview (https://openreview.net/forum?id=SkeZisA5t7). Pages: 11.
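One concrete way to make a binned mutual information estimator "adaptive" is to place bin edges at quantiles of each unit's empirical activity rather than at fixed positions, so unbounded activations are resolved where their mass actually lies. The Python sketch below illustrates that idea on toy data; the function names and lognormal toy activations are assumptions for illustration, not the authors' exact estimators.

    import numpy as np

    def entropy_of_patterns(binned):
        # entropy (bits) over the discrete joint patterns of one layer's units
        _, counts = np.unique(binned, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def discretize_fixed(acts, n_bins=30):
        # naive fixed-range bins, which waste resolution on unbounded activations
        edges = np.linspace(acts.min(), acts.max(), n_bins + 1)
        return np.digitize(acts, edges[1:-1])

    def discretize_adaptive(acts, n_bins=30):
        # adaptive bins: equal-occupancy edges from each unit's activity quantiles
        qs = np.quantile(acts, np.linspace(0, 1, n_bins + 1)[1:-1], axis=0)
        out = np.empty(acts.shape, dtype=int)
        for j in range(acts.shape[1]):
            out[:, j] = np.digitize(acts[:, j], qs[:, j])
        return out

    # toy "hidden layer" with heavy-tailed, unbounded activity (ReLU-like)
    rng = np.random.default_rng(1)
    acts = rng.lognormal(0.0, 1.0, size=(5000, 3))

    print("fixed bins   :", entropy_of_patterns(discretize_fixed(acts)))
    print("adaptive bins:", entropy_of_patterns(discretize_adaptive(acts)))

For a deterministic layer, the mutual information between input and hidden representation reduces to the entropy of the discretized activity, which is what the sketch compares across the two binning schemes.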
Neural circuit function redundancy in brain disorders
Redundancy is a ubiquitous property of the nervous system. This means that vastly different configurations of cellular and synaptic components can enable the same neural circuit functions. However, until recently, very little brain disorder research has considered the implications of this characteristic when designing experiments or interpreting data. Here, we first summarise the evidence for redundancy in healthy brains, explaining redundancy and three related sub-concepts: sloppiness, dependencies and multiple solutions. We then lay out key implications for brain disorder research, covering recent examples of redundancy effects in experimental studies of psychiatric disorders. Finally, we give predictions for future experiments based on these concepts.
Signatures of Bayesian inference emerge from energy efficient synapses
Biological synaptic transmission is unreliable, and this unreliability likely
degrades neural circuit performance. While there are biophysical mechanisms
that can increase reliability, for instance by increasing vesicle release
probability, these mechanisms cost energy. We examined four such mechanisms
along with the associated scaling of the energetic costs. We then embedded
these energetic costs for reliability in artificial neural networks (ANNs) with
trainable stochastic synapses, and trained these networks on standard image
classification tasks. The resulting networks revealed a tradeoff between
circuit performance and the energetic cost of synaptic reliability.
Additionally, the optimised networks exhibited two testable predictions
consistent with pre-existing experimental data. Specifically, synapses with
lower variability tended to have 1) higher input firing rates and 2) lower
learning rates. Surprisingly, these predictions also arise when synapse
statistics are inferred through Bayesian inference. Indeed, we were able to
find a formal, theoretical link between the performance-reliability cost
tradeoff and Bayesian inference. This connection suggests two incompatible
possibilities: evolution may have chanced upon a scheme for implementing
Bayesian inference by optimising energy efficiency, or alternatively, energy
efficient synapses may display signatures of Bayesian inference without
actually using Bayes to reason about uncertainty.

Comment: 29 pages, 11 figures.
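A minimal sketch of the kind of training setup described above, written in PyTorch: a linear layer whose weights have trainable means and noise scales, with an energy penalty that grows as synapses are made more reliable. The 1/sigma cost and all constants here are stand-in assumptions, not the paper's biophysically derived scalings.

    import torch
    import torch.nn as nn

    class StochasticLinear(nn.Module):
        """Linear layer with independently noisy weights (mean + noise scale)."""
        def __init__(self, n_in, n_out):
            super().__init__()
            self.mu = nn.Parameter(0.1 * torch.randn(n_out, n_in))
            self.log_sigma = nn.Parameter(torch.full((n_out, n_in), -1.0))

        def forward(self, x):
            # reparameterised sample: every forward pass draws noisy weights
            sigma = self.log_sigma.exp()
            w = self.mu + sigma * torch.randn_like(sigma)
            return x @ w.t()

        def energy_cost(self):
            # reliability is expensive: cost rises as the noise scale shrinks
            return (1.0 / self.log_sigma.exp()).mean()

    layer = StochasticLinear(784, 10)
    x = torch.randn(32, 784)
    labels = torch.randint(0, 10, (32,))
    task_loss = nn.functional.cross_entropy(layer(x), labels)
    loss = task_loss + 1e-3 * layer.energy_cost()   # performance vs energy
    loss.backward()

Training then trades task loss against the energy term, so the optimiser must decide which synapses are worth making reliable, which is the tradeoff the abstract describes.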
Beyond excitation/inhibition imbalance in multidimensional models of neural circuit changes in brain disorders
A leading theory holds that neurodevelopmental brain disorders arise from imbalances in excitatory and inhibitory (E/I) brain circuitry. However, it is unclear whether this one-dimensional model is rich enough to capture the multiple neural circuit alterations underlying brain disorders. Here, we combined computational simulations with analysis of in vivo two-photon Ca2+ imaging data from somatosensory cortex of Fmr1 knock-out (KO) mice, a model of Fragile-X Syndrome, to test the E/I imbalance theory. We found that: (1) the E/I imbalance model cannot account for joint alterations in the observed neural firing rates and correlations; (2) neural circuit function is vastly more sensitive to changes in some cellular components than to others; (3) the direction of circuit alterations in Fmr1 KO mice changes across development. These findings suggest that the basic E/I imbalance model should be updated to higher-dimensional models that can better capture the multidimensional computational functions of neural circuits.
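The underdetermination argument can be seen even in a linear two-population rate model, sketched below in Python (an illustration, not the paper's simulations; the weights and inputs are assumed numbers). For a fixed target excitatory rate there is a whole family of weight configurations that produce it, so a single E/I-ratio number cannot identify which component changed.

    import numpy as np

    def balancing_w_ei(w_ee, w_ie, w_ii, target_re, h_e=1.0, h_i=0.8):
        # Linear rate model at steady state:
        #   r_e = w_ee*r_e - w_ei*r_i + h_e
        #   r_i = w_ie*r_e - w_ii*r_i + h_i
        # Given the other weights, solve for the w_ei that yields target_re.
        r_i = (h_i + w_ie * target_re) / (1.0 + w_ii)
        w_ei = (h_e - (1.0 - w_ee) * target_re) / r_i
        return w_ei, r_i

    for w_ee in (0.2, 0.5, 0.8):
        w_ei, r_i = balancing_w_ei(w_ee, w_ie=0.6, w_ii=0.2, target_re=1.0)
        print(f"w_ee={w_ee:.1f}  needs w_ei={w_ei:.3f}  (r_E=1.0, r_I={r_i:.3f})")

All three circuits produce identical firing rates, yet they would respond differently to perturbation and differ in other statistics such as correlations, matching the abstract's point that rates alone cannot arbitrate between them.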
Spontaneous action potentials and neural coding in unmyelinated axons
The voltage-gated Na and K channels in neurons are responsible for action potential generation. Because ion channels open and close stochastically, spontaneous (ectopic) action potentials can occur even in the absence of stimulation. While spontaneous action potentials have been studied in detail in single-compartment models, studies of spatially extended processes have been limited. The simulations and analysis presented here show that the spontaneous firing rate in unmyelinated axons depends nonmonotonically on axon length, that the spontaneous activity has sub-Poisson statistics, and that spontaneous spikes can hamper neural coding by reducing the probability of transmitting the first spike in a train.
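The sub-Poisson claim can be checked with a Fano factor computation like the Python sketch below (run here on illustrative surrogate spike trains, not on the paper's axon simulations): spike counts in fixed windows have a variance-to-mean ratio of 1 for a Poisson process and below 1 for more regular firing.

    import numpy as np

    def fano_factor(spike_times, t_max, window):
        # variance/mean of spike counts in fixed windows (1 for Poisson)
        edges = np.arange(0.0, t_max + window, window)
        counts, _ = np.histogram(spike_times, bins=edges)
        return counts.var() / counts.mean()

    rng = np.random.default_rng(2)
    t_max = 1000.0   # ms

    # Poisson surrogate vs a more regular gamma-interval surrogate (20 ms mean ISI)
    poisson_spikes = np.cumsum(rng.exponential(20.0, size=200))
    regular_spikes = np.cumsum(rng.gamma(4.0, 5.0, size=200))

    print("Poisson surrogate FF:", round(fano_factor(poisson_spikes, t_max, 100.0), 2))
    print("regular surrogate FF:", round(fano_factor(regular_spikes, t_max, 100.0), 2))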
The population tracking model: a simple, scalable statistical model for neural population data
Our understanding of neural population coding has been limited by a lack of analysis methods for characterizing spiking data from large populations. The biggest challenge comes from the fact that the number of possible network activity patterns scales exponentially with the number of neurons recorded (2^N possible binary patterns for N neurons). Here we introduce a new statistical method for characterizing neural population activity that requires semi-independent fitting of only as many parameters as the square of the number of neurons, so it needs drastically smaller data sets and minimal computation time. The model works by matching the population rate (the number of neurons synchronously active) and the probability that each individual neuron fires given the population rate. We found that this model can accurately fit synthetic data from up to 1000 neurons. We also found that the model could rapidly decode visual stimuli from neural population data from macaque primary visual cortex, about 65 ms after stimulus onset. Finally, we used the model to estimate the entropy of neural population activity in developing mouse somatosensory cortex and, surprisingly, found that it first increases, then decreases during development. This statistical model opens new options for interrogating neural population data and can bolster the use of modern large-scale in vivo Ca2+ and voltage imaging tools.
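From the description above, the model's two ingredient distributions can be fit by simple counting; the Python sketch below is an illustrative implementation built from that description, not the authors' code, and the pattern probability it prints is left unnormalised within each population-count class.

    import numpy as np

    rng = np.random.default_rng(3)
    T, N = 10000, 50                                   # time bins, neurons
    raster = (rng.random((T, N)) < 0.05).astype(int)   # toy binary spike raster

    # Ingredient 1: the population synchrony distribution p(K), ~N parameters
    K = raster.sum(axis=1)
    p_K = np.bincount(K, minlength=N + 1) / T

    # Ingredient 2: p(x_i = 1 | K) for each neuron, ~N^2 parameters in total
    p_i_given_K = np.zeros((N + 1, N))
    for k in range(N + 1):
        bins_k = raster[K == k]
        if len(bins_k):
            p_i_given_K[k] = bins_k.mean(axis=0)

    # Model score for one observed pattern (unnormalised within its K class)
    x = raster[0]
    k = x.sum()
    p_pattern = p_K[k] * np.prod(np.where(x, p_i_given_K[k], 1.0 - p_i_given_K[k]))
    print(f"pattern with K={k}: model probability ~ {p_pattern:.3e}")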